
    Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the Guruswami-Sudan Algorithm

    Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes is based on Bounded Minimum Distance (BMD) decoders with an erasure option. Such decoders have error/erasure tradeoff factor L = 2, which means that an error is twice as expensive as an erasure in terms of the code's minimum distance. The Guruswami-Sudan (GS) list decoder can be considered the state of the art in algebraic decoding of RS codes. Besides an erasure option, it allows L to be adjusted to values in the range 1 < L <= 2. Based on previous work, we provide formulae which allow the erasure option of decoders with arbitrary L to be exploited optimally (in terms of residual codeword error probability), if the decoder can be used z >= 1 times. We show that BMD decoders with z_BMD decoding trials can result in lower residual codeword error probability than GS decoders with z_GS trials, if z_BMD is only slightly larger than z_GS. This is of practical interest since BMD decoders generally have lower computational complexity than GS decoders.
    Comment: Accepted for the 2011 IEEE International Symposium on Information Theory, St. Petersburg, Russia, July 31 - August 05, 2011. 5 pages, 2 figures.
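    The error/erasure tradeoff factor described above can be illustrated as a simple feasibility check. This is a hedged sketch, not code from the paper: it assumes the standard decoding condition L*e + t <= d - 1, so that L = 2 recovers the BMD case and 1 < L <= 2 corresponds to the GS range; the (255, 223) RS parameters are an illustrative example.

    ```python
    def decodable(e, t, d, tradeoff=2.0):
        """Check whether e errors and t erasures lie within the decoder's
        guarantee, assuming the condition tradeoff*e + t <= d - 1.
        tradeoff = 2 models a BMD decoder; the GS list decoder permits
        tradeoff values in (1, 2] (the abstract's factor L)."""
        return tradeoff * e + t <= d - 1

    # Illustrative example: a (255, 223) RS code has minimum distance d = 33.
    assert decodable(16, 0, 33)            # 2*16 + 0 = 32 <= 32: decodable
    assert not decodable(16, 1, 33)        # 2*16 + 1 = 33 >  32: not decodable
    assert decodable(16, 1, 33, 1.9)       # 1.9*16 + 1 = 31.4 <= 32: GS-style
    ```

    Lowering the tradeoff factor below 2, as the GS decoder allows, enlarges the set of correctable error/erasure pairs, which is what the optimization over L exploits.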

    Optimal Thresholds for GMD Decoding with (L+1)/L-extended Bounded Distance Decoders

    We investigate threshold-based multi-trial decoding of concatenated codes with an inner Maximum-Likelihood decoder and an outer error/erasure (L+1)/L-extended Bounded Distance decoder, i.e. a decoder which corrects e errors and t erasures if e(L+1)/L + t <= d - 1, where d is the minimum distance of the outer code and L is a positive integer. This is a generalization of Forney's GMD decoding, which was considered only for L = 1, i.e. outer Bounded Minimum Distance decoding. One important example of (L+1)/L-extended Bounded Distance decoders is decoding of L-Interleaved Reed-Solomon codes. Our main contribution is a threshold location formula, which allows unreliable inner decoding results to be optimally erased for a given number of decoding trials and parameter L. Here, optimal means that the residual codeword error probability of the concatenated code is minimized. We give an estimate of this probability for any number of decoding trials.
    Comment: Accepted for the 2010 IEEE International Symposium on Information Theory, Austin, TX, USA, June 13 - 18, 2010. 5 pages, 2 figures.
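    The decoding condition stated in the abstract, e(L+1)/L + t <= d - 1, can be checked directly. The sketch below is illustrative only; the outer-code distance d = 17 is a hypothetical value, and exact rational arithmetic avoids floating-point edge cases at the boundary.

    ```python
    from fractions import Fraction

    def extended_bd_decodable(e, t, d, L=1):
        """(L+1)/L-extended Bounded Distance condition from the abstract:
        e errors and t erasures are correctable if e*(L+1)/L + t <= d - 1.
        L = 1 recovers the classical BMD case (2e + t <= d - 1)."""
        return Fraction(L + 1, L) * e + t <= d - 1

    d = 17  # hypothetical outer-code minimum distance
    assert extended_bd_decodable(8, 0, d, L=1)       # 2*8 = 16 <= 16
    assert not extended_bd_decodable(8, 1, d, L=1)   # 17 > 16
    assert extended_bd_decodable(10, 0, d, L=2)      # 3/2*10 = 15 <= 16
    assert not extended_bd_decodable(11, 0, d, L=2)  # 16.5 > 16
    ```

    Note how increasing L (e.g. via interleaving) cheapens errors relative to erasures: with d = 17, the decoder corrects 10 errors at L = 2 but only 8 at L = 1.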

    Woven Graph Codes: Asymptotic Performances and Examples

    Constructions of woven graph codes based on constituent block and convolutional codes are studied. It is shown that within the random ensemble of such codes based on s-partite, s-uniform hypergraphs, where s depends only on the code rate, there exist codes satisfying the Varshamov-Gilbert (VG) and the Costello lower bound on the minimum distance and the free distance, respectively. A connection between regular bipartite graphs and tailbiting codes is shown. Some examples of woven graph codes are presented. Among them, an example of a rate R_wg = 1/3 woven graph code with d_free = 32, based on Heawood's bipartite graph and containing n = 7 constituent rate R^c = 2/3 convolutional codes with overall constraint lengths ν^c = 5, is given. An encoding procedure for woven graph codes with complexity proportional to the number of constituent codes and their overall constraint length ν^c is presented.
    Comment: Submitted to IEEE Trans. Inform. Theory.

    Unequal error protection explained by state-transition graphs

    In this paper, the unequal error-correcting capabilities of convolutional codes are studied. State-transition graphs are used to investigate the error-correcting capabilities for specific inputs or outputs of given convolutional encoding matrices. Active burst input- and output-distances are defined and shown to be generalizations of the active burst distance. These new distances provide better estimates of the error-correction capability, especially when communicating close to or above the channel capacity, and they are connected to the corresponding free input- and output-distances via lower bounds.

    On the error correcting capability of convolutional codes: Old and new

    A brief introduction to convolutional coding is given. The active distances are reviewed and shown to be an important tool for analyzing the error-correcting capability of maximum-likelihood (ML) decoding of convolutional codes when communicating over a binary symmetric channel (BSC). Due to memory and delay constraints in practical coding schemes, convolutional codes are often either terminated or decoded by a window decoder. When a window decoder is used, the convolutional code sequence is not terminated; instead, the window decoder estimates the information digits after receiving a finite number of noise-corrupted code symbols, thereby keeping the decoding delay short. A characterization of the error-correcting capability of window-decoded convolutional codes is given using the active distances.

    On unequal error protection for code symbols via active distances

    This paper deals with unequal error protection for code symbols of convolutional codes. The unequal error protection can be achieved due to different distance properties of output sequences and is sometimes described in terms of the corresponding effective free distances. However, in many situations the effective free distance does not provide enough information on the error-correcting capability. We consider unequal error protection for code symbols via active distances, which gives us more detailed information. We also establish a connection to the effective free distances. Furthermore, we show that woven convolutional codes with inner warp are naturally suited for unequal error protection of code symbols.

    On the error detecting and erasure correcting capabilities of convolutional codes

    A convolutional code can be used to detect or correct infinite sequences of errors, or to correct infinite sequences of erasures. First, erasure correction is shown to be related to error detection, and error detection to error correction. Next, the active burst distance is exploited, and various bounds on erasure correction, error detection, and error correction are obtained for convolutional codes. These bounds are illustrated by examples.

    On the burst error detecting and erasure correcting capabilities of convolutional codes

    In this paper, it is shown how a convolutional code can be used either to detect errors or to correct erasures. The erasure correction is related to the error detection. Using the active burst distance, lower bounds on error correction, erasure correction, and error detection are obtained for convolutional codes.